
    Grounding the Interaction: Knowledge Management for Interactive Robots

    With the rise of so-called cognitive robotics, the need for advanced tools to store, manipulate, and reason about the knowledge acquired by a robot has become clear. But storing and manipulating knowledge first requires understanding what knowledge means to the robot and how to represent it in a machine-processable way.

    This work first provides a systematic study of the knowledge requirements of modern robotic applications in the context of service robotics and human-robot interaction. What are the expressiveness requirements for a robot? What are its needs in terms of reasoning techniques? What requirements do other cognitive functions, such as perception or decision making, place on the robot's knowledge-processing structure? We propose a novel typology of desirable features for knowledge representation systems, supported by an extensive review of existing tools in our community.

    In a second part, the thesis presents in depth ORO, a particular instantiation of a knowledge representation and manipulation system designed and implemented during the preparation of the thesis. We elaborate on the inner workings of this system, as well as its integration into several complete robot control stacks. A particular focus is given to the modelling of agent-dependent symbolic perspectives and their relation to theories of mind.

    The third part of the study presents one important application of knowledge representation systems in the human-robot interaction context: situated dialogue. Our approach and the associated algorithms leading to the interactive grounding of unconstrained verbal communication are presented, followed by several experiments carried out at the Laboratoire d'Analyse et d'Architecture des Systèmes (CNRS, Toulouse) and at the Intelligent Autonomous Systems group at Munich Technical University.

    The thesis concludes with considerations on the viability and importance of an explicit management of the agents' knowledge, along with a reflection on the bricks still missing in our research community on the way towards "human-level robots".
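
    As a rough, purely illustrative sketch of what such a knowledge store exposes (this is not the actual ORO API; the class and method names below are hypothetical), one can picture a set of per-agent models of symbolic statements that can be queried with simple patterns:

```python
# Hypothetical sketch of an ORO-like knowledge store: per-agent models of
# (subject, predicate, object) statements with simple pattern queries.
# Names and API are illustrative only, not the actual ORO interface.
from collections import defaultdict


class AgentModel:
    """Holds the symbolic statements attributed to one agent."""

    def __init__(self):
        self.statements = set()  # (subject, predicate, object) triples

    def add(self, triple):
        self.statements.add(triple)

    def find(self, pattern):
        """Return the triples matching a pattern; '?' is a wildcard."""
        s, p, o = pattern
        return [t for t in self.statements
                if (s == "?" or t[0] == s)
                and (p == "?" or t[1] == p)
                and (o == "?" or t[2] == o)]


class KnowledgeBase:
    """One symbolic model per agent, queried from that agent's perspective."""

    def __init__(self):
        self.models = defaultdict(AgentModel)

    def add(self, agent, triple):
        self.models[agent].add(triple)

    def find(self, agent, pattern):
        return self.models[agent].find(pattern)


kb = KnowledgeBase()
kb.add("robot", ("mug1", "isOn", "table"))       # the robot's own model
kb.add("human", ("mug1", "isVisible", "false"))  # the robot's model of the human

print(kb.find("robot", ("mug1", "?", "?")))
print(kb.find("human", ("?", "isVisible", "false")))
```

    Keeping one model per agent is what makes perspective-dependent queries, and hence a simple form of theory of mind, possible.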

    A Few AI Challenges Raised while Developing an Architecture for Human-Robot Cooperative Task Achievement

    Over the last five years, while developing an architecture for autonomous service robots in human environments, we have identified several key decisional issues that must be tackled for a cognitive robot to share space and tasks with a human. We introduce some of them here: situation assessment and mutual modelling, management and exploitation of each agent's (human and robot) knowledge in separate cognitive models, natural multi-modal communication, "human-aware" task planning, and interleaved plan achievement by human and robot. As a general take-home message, explicit knowledge management, both symbolic and geometric, proves to be a key enabler when addressing these challenges, as it pushes for a different, more semantic way of framing decision making in human-robot interaction.
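
    A minimal sketch of the mutual-modelling idea, assuming nothing about the authors' actual architecture: keep one fact set per agent and detect where the robot's beliefs diverge from the beliefs it attributes to the human.

```python
# Toy illustration of mutual modelling: one fact set per agent, and a report
# of the facts the robot holds that its model of the human does not.
# This is a sketch, not the architecture described in the paper.

def divergent_beliefs(robot_facts, human_model_facts):
    """Facts the robot holds that it does not attribute to the human."""
    return robot_facts - human_model_facts


robot_facts = {("mug1", "isOn", "table"), ("door", "isOpen", "true")}
human_model = {("door", "isOpen", "true")}

for fact in divergent_beliefs(robot_facts, human_model):
    # Such divergences are natural triggers for communication
    # ("the mug is on the table") before committing to a shared plan.
    print("to be communicated:", fact)
```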

    Generating spatial referring expressions in a social robot: Dynamic vs. non-ambiguous

    Generating spatial referring expressions is key to allowing robots to communicate with people in an environment. Most generation algorithms focus on creating a non-ambiguous description and on how best to deal with the combinatorial explosion this can create in a complex environment. However, this is not how people naturally communicate. Humans tend to give an under-specified description and then rely on a strategy of repair to reduce the number of possible locations or objects until the correct one is identified, which we refer to here as a dynamic description. We present a method for generating these dynamic descriptions for human-robot interaction, using machine learning to generate repair statements. We also present a study with 61 participants in an object placement task, presented in a 2D environment that favored a non-ambiguous description. In this study we demonstrate that our dynamic method of communication can be more efficient for identifying a location than a non-ambiguous one.
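
    The following toy sketch illustrates the dynamic-description strategy in principle; the repair selection uses a simple discriminative heuristic as a stand-in for the learned model described in the paper, and the scene attributes are made up:

```python
# Toy sketch of a "dynamic description": start under-specified, then issue
# repair statements that prune candidate objects until one remains.
from collections import Counter

candidates = [
    {"id": "cup1", "colour": "red",  "zone": "left",  "size": "small"},
    {"id": "cup2", "colour": "red",  "zone": "right", "size": "large"},
    {"id": "cup3", "colour": "blue", "zone": "right", "size": "small"},
]
target = candidates[1]


def best_repair(cands, target, used):
    """Pick the target attribute that rules out the most remaining candidates."""
    scores = Counter()
    for attr in ("colour", "zone", "size"):
        if attr in used:
            continue
        scores[attr] = sum(1 for c in cands if c[attr] != target[attr])
    return scores.most_common(1)[0][0]


used = set()
print("Initial (under-specified): the cup")
while len(candidates) > 1:
    attr = best_repair(candidates, target, used)
    used.add(attr)
    print(f"Repair: the {target[attr]} one ({attr})")
    candidates = [c for c in candidates if c[attr] == target[attr]]

print("Resolved to:", candidates[0]["id"])
```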

    Bio-Inspired Grasping Controller for Sensorized 2-DoF Grippers

    We present a holistic grasping controller, combining free-space position control and in-contact force control for reliable grasping given uncertain object pose estimates. Using tactile fingertip sensors, the controller minimizes undesired object displacement during grasping by pausing the closing motion of individual finger joints on first contact until force closure is established. While holding an object, the controller is compliant with external forces to avoid high internal object forces and to prevent object damage. Gravity as an external force is explicitly considered and compensated for, thus preventing gravity-induced object drift. We evaluate the controller in two experiments on the TIAGo robot and its parallel-jaw gripper, demonstrating the effectiveness of the approach for robust grasping and minimal object displacement. In a series of ablation studies, we demonstrate the utility of the individual controller components.
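
    The grasp sequence described above can be pictured as a per-joint state machine: close in position mode, pause each joint on first contact, then switch to force control once force closure is established. The sketch below is schematic; thresholds, units, and the sensor/command interface are assumptions, not the TIAGo gripper API, and gravity compensation and holding compliance are omitted for brevity.

```python
# Schematic per-joint grasp state machine (illustrative, not the paper's code).
from enum import Enum, auto


class JointState(Enum):
    CLOSING = auto()        # free-space position control
    PAUSED = auto()         # contact detected, waiting for force closure
    FORCE_CONTROL = auto()  # regulating the grasp force


CONTACT_THRESHOLD = 0.5  # N, assumed tactile contact threshold
TARGET_FORCE = 2.0       # N, assumed desired grasp force


def update_joint(state, measured_force, force_closure):
    """One control tick for a single gripper joint; returns (state, command)."""
    if state == JointState.CLOSING:
        if measured_force > CONTACT_THRESHOLD:
            return JointState.PAUSED, 0.0   # stop closing on first contact
        return state, -0.01                 # keep closing (rad per tick)
    if state == JointState.PAUSED:
        if force_closure:
            return JointState.FORCE_CONTROL, 0.0
        return state, 0.0                   # hold position until force closure
    # Force control: a simple proportional law towards the target grasp force.
    error = TARGET_FORCE - measured_force
    return state, -0.005 * error


# One tick in free space, before any contact:
state, cmd = update_joint(JointState.CLOSING, measured_force=0.0, force_closure=False)
print(state, cmd)
```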

    A review: Can robots reshape K-12 STEM education?

    Can robots in the classroom reshape K-12 STEM education and foster new ways of learning? To sketch an answer, this article reviews, side by side, the existing literature on robot-based learning activities featuring mathematics and physics (purposefully setting aside the well-studied field of "robots to teach robotics") and the existing robot platforms and toolkits suited to the classroom environment (in terms of cost, ease of use, orchestration load for the teacher, etc.). Our survey suggests that the use of robots in the classroom has indeed moved from a purely technological focus to an educational one, encompassing new didactic fields. We have, however, identified several shortcomings, in terms of robotic platforms and teaching environments, that contribute to the limited presence of robotics in existing curricula, with the lack of specific teacher training likely being pivotal. Finally, we propose an educational framework merging the tangibility of robots with the advanced visibility of augmented reality.

    Learning by teaching a robot: The case of handwriting

    Thomas (all children's names have been changed) is five and a half years old and has been diagnosed with visuoconstructive deficits. He is under the care of an occupational therapist and tries to work around his inability to draw letters in a consistent manner. Vincent is six and struggles at school with his poor handwriting and even poorer self-confidence. Whereas Thomas is lively and always quick to shift his attention from one activity to another, Vincent is shy and poised. Two very different children, each facing the same difficulty writing legibly. Additionally, hidden behind these impaired skills, psychosocial difficulties arise: they underperform at school. Thomas has to go for follow-up visits every week, and they both live under the label of "special care." This is a source of anxiety for the children and for their parents alike.

    The Cognitive Correlates of Anthropomorphism

    While anthropomorphism in human-robot interaction is often discussed, it still appears to lack formal grounds. We recently proposed a first model of the dynamics of anthropomorphism that reflects how anthropomorphism evolves over the course of a human-robot interaction. The model also accounts for non-monotonic effects such as the so-called novelty effect. This contribution builds upon that model to investigate the cognitive correlates induced by a sustained human-robot interaction, and we present our initial ideas here. We propose to distinguish three cognitive phases: pre-cognitive, familiarity-based, and adapted anthropomorphism, and we outline how these phases relate to the phenomenological evolution of anthropomorphism over time.